feat: add new GPT-4.1 model variants to completion.go #966
Conversation
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@            Coverage Diff             @@
##           master     #966       +/-   ##
===========================================
- Coverage   98.46%   85.41%   -13.05%
===========================================
  Files          24       43       +19
  Lines        1364     2263      +899
===========================================
+ Hits         1343     1933      +590
- Misses         15      308      +293
- Partials        6       22       +16

☔ View full report in Codecov by Sentry.
@sashabaranov please review 🙏

Thank you!
GPT4Dot1Mini:         true,
GPT4Dot1Mini20250414: true,
GPT4Dot1Nano:         true,
GPT4Dot1Nano20250414: true,
we should flip this mapping to an inverse one; most of the models are not enabled for the completion endpoint!
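A minimal sketch of what that inversion could look like, assuming the map is keyed by endpoint and that checkEndpointSupportsModel is the gatekeeper. The name enabledModelsForEndpoints and the model list below are illustrative assumptions, not the library's actual code:

// Hypothetical allow-list: enumerate only the models that ARE supported
// by the legacy /completions endpoint, instead of maintaining an
// ever-growing deny-list.
var enabledModelsForEndpoints = map[string]map[string]bool{
	"/completions": {
		"gpt-3.5-turbo-instruct": true, // example completion-capable models
		"babbage-002":            true,
		"davinci-002":            true,
	},
}

// Any model not listed for a gated endpoint is rejected by default,
// so new chat-only models would need no map changes at all.
func checkEndpointSupportsModel(endpoint, model string) bool {
	enabled, ok := enabledModelsForEndpoints[endpoint]
	if !ok {
		return true // endpoints without an allow-list accept any model
	}
	return enabled[model]
}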
Pull Request Overview
This pull request adds support for the new GPT-4.1 series model variants: it declares new constants in completion.go, updates the disabled-models map, and adds corresponding tests in completion_test.go to verify that the completion endpoint rejects these models.
- Added new GPT-4.1 series model constants in completion.go.
- Updated the disabledModelsForEndpoints map to disable the new GPT-4.1 models.
- Introduced tests in completion_test.go to verify that requests using GPT-4.1 variants return the expected error.
Reviewed Changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.
File | Description
---|---
completion.go | New model constants and updates to the disabled models map.
completion_test.go | Added tests to verify that GPT-4.1 series and related models are unsupported.
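For reference, the new constants presumably follow the library's existing naming pattern. A sketch based on the identifiers visible in this diff; GPT4Dot1, GPT4Dot120250414, and the exact string values are assumptions, not copied from the PR:

const (
	GPT4Dot1             = "gpt-4.1"            // assumed, not shown in the diff excerpt
	GPT4Dot120250414     = "gpt-4.1-2025-04-14" // assumed, not shown in the diff excerpt
	GPT4Dot1Mini         = "gpt-4.1-mini"
	GPT4Dot1Mini20250414 = "gpt-4.1-mini-2025-04-14"
	GPT4Dot1Nano         = "gpt-4.1-nano"
	GPT4Dot1Nano20250414 = "gpt-4.1-nano-2025-04-14"
)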
t.Run(model, func(t *testing.T) {
	_, err := client.CreateCompletion(
		context.Background(),
		openai.CompletionRequest{
			MaxTokens: 5,
			Model:     model,
		},
	)
	if !errors.Is(err, openai.ErrCompletionUnsupportedModel) {
		t.Fatalf("CreateCompletion should return ErrCompletionUnsupportedModel for %s model, but returned: %v", model, err)
When using t.Run inside a loop, the loop variable 'model' may be captured by reference, which could lead to unexpected behavior. Consider assigning the loop variable to a local variable (e.g., 'm := model') before invoking t.Run.
Suggested change:

-	t.Run(model, func(t *testing.T) {
+	m := model // Create a new local variable to capture the current value of model
+	t.Run(m, func(t *testing.T) {
 		_, err := client.CreateCompletion(
 			context.Background(),
 			openai.CompletionRequest{
 				MaxTokens: 5,
-				Model:     model,
+				Model:     m,
 			},
 		)
 		if !errors.Is(err, openai.ErrCompletionUnsupportedModel) {
-			t.Fatalf("CreateCompletion should return ErrCompletionUnsupportedModel for %s model, but returned: %v", model, err)
+			t.Fatalf("CreateCompletion should return ErrCompletionUnsupportedModel for %s model, but returned: %v", m, err)
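For context on this suggestion: the capture pitfall only bites when the subtest closures outlive their loop iteration (for example, parallel subtests via t.Parallel()) on Go versions before 1.22; since Go 1.22 each iteration gets its own loop variable and the m := model copy is unnecessary. A minimal standalone sketch of the pre-1.22 behavior:

package main

import "fmt"

func main() {
	var printers []func()
	for _, s := range []string{"a", "b", "c"} {
		s := s // on Go <1.22, omit this copy and all three closures share one variable, printing "c" three times
		printers = append(printers, func() { fmt.Println(s) })
	}
	for _, p := range printers {
		p() // prints a, b, c
	}
}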
t.Run(model, func(t *testing.T) {
	_, err := client.CreateCompletion(
		context.Background(),
		openai.CompletionRequest{
			MaxTokens: 5,
			Model:     model,
		},
	)
	if !errors.Is(err, openai.ErrCompletionUnsupportedModel) {
		t.Fatalf("CreateCompletion should return ErrCompletionUnsupportedModel for %s model, but returned: %v", model, err)
When using t.Run inside a loop, the loop variable 'model' may be captured by reference, which could lead to unexpected behavior. Consider assigning the loop variable to a local variable (e.g., 'm := model') before invoking t.Run.
Suggested change:

-	t.Run(model, func(t *testing.T) {
+	m := model // Assign loop variable to a local variable
+	t.Run(m, func(t *testing.T) {
 		_, err := client.CreateCompletion(
 			context.Background(),
 			openai.CompletionRequest{
 				MaxTokens: 5,
-				Model:     model,
+				Model:     m,
 			},
 		)
 		if !errors.Is(err, openai.ErrCompletionUnsupportedModel) {
-			t.Fatalf("CreateCompletion should return ErrCompletionUnsupportedModel for %s model, but returned: %v", model, err)
+			t.Fatalf("CreateCompletion should return ErrCompletionUnsupportedModel for %s model, but returned: %v", m, err)
I've added the new GPT-4.1 series.
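For downstream users, the net effect is that passing a GPT-4.1 model to the legacy completion endpoint fails fast. A usage sketch, assuming a configured client ("your-token" is a placeholder):

package main

import (
	"context"
	"errors"
	"fmt"

	"github.com/sashabaranov/go-openai"
)

func main() {
	client := openai.NewClient("your-token") // placeholder API key
	_, err := client.CreateCompletion(
		context.Background(),
		openai.CompletionRequest{
			Model:     openai.GPT4Dot1Mini,
			MaxTokens: 5,
		},
	)
	if errors.Is(err, openai.ErrCompletionUnsupportedModel) {
		fmt.Println("GPT-4.1 models are chat models; use CreateChatCompletion instead")
	}
}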